Does AI Really Make Coders Faster?

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me." But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes....

Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling the lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..."

There are also more specific security concerns, the article notes. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.

Other key points from the article: LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks." "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain." "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..." "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools." The story is part of MIT Technology Review's new Hype Correction series of articles about AI.


Firefox Will Ship With an 'AI Kill Switch' To Completely Disable All AI Features

An anonymous reader shared this report from 9to5Linux: After the controversial news shared earlier this week by Mozilla's new CEO that Firefox will evolve into "a modern AI browser," the company has now revealed it is working on an AI kill switch for the open-source web browser... What was not made clear [in Tuesday's comments by new Mozilla CEO Anthony Enzor-DeMeo] is that Firefox will also ship with an AI kill switch that will let users completely disable all the AI features included in Firefox. Mozilla shared this important update earlier Thursday to make it clear to everyone that Firefox will still be a trusted web browser.... "...that's how seriously and absolutely we're taking this," said Firefox developer Jake Archibald on Mastodon.

Archibald added that all the AI features that are or will be included in Firefox will also be opt-in. "I think there are some grey areas in what 'opt-in' means to different people (e.g. is a new toolbar button opt-in?), but the kill switch will absolutely remove all that stuff, and never show it in future. That's unambiguous..." Mozilla contacted me shortly after this story was written to confirm that the "AI Kill Switch" will be implemented in Q1 2026.

The article also cites this quote left by Mozilla's new CEO on Reddit: "Rest assured, Firefox will always remain a browser built around user control. That includes AI. You will have a clear way to turn AI features off. A real kill switch is coming in Q1 of 2026. Choice matters and demonstrating our commitment to choice is how we build and maintain trust."
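For readers who don't want to wait for the kill switch: recent Firefox releases already expose per-feature preferences for the existing AI features, settable in about:config or a user.js file. A rough sketch (these prefs exist in current releases, but names can change between versions, and this is not the promised Q1 2026 kill switch):

```
// user.js -- disable Firefox's current AI features via per-feature prefs.
// Not the upcoming kill switch; pref names may change between releases.
user_pref("browser.ml.enable", false);        // local ML/inference engine
user_pref("browser.ml.chat.enabled", false);  // AI chatbot sidebar
```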


Does swearing make you stronger? Science says yes.


If you’re human, you’ve probably hollered a curse word or two (or three) when barking your shin on a table edge or hitting your thumb with a hammer. Perhaps you’ve noticed that this seems to lessen your pain. There’s a growing body of scientific evidence that this is indeed the case. The technical term is the “hypoalgesic effect of swearing.” Cursing can also improve physical strength and endurance, according to a new paper published in the journal American Psychologist.

As previously reported, co-author Richard Stephens, a psychologist at Keele University, became interested in studying the potential benefits of profanity after noting his wife’s “unsavory language” while she was giving birth, and wondering whether profanity really could help alleviate pain. “Swearing is such a common response to pain. There has to be an underlying reason why we do it,” Stephens told Scientific American after publishing a 2009 study that was awarded the 2010 Ig Nobel Peace Prize.

For that study, Stephens and his colleagues asked 67 study participants (college students) to immerse their hands in a bucket of ice water. They were then instructed to either swear repeatedly using the profanity of their choice or chant a neutral word. Lo and behold, the participants said they experienced less pain when they swore and were also able to leave their hands in the bucket about 40 seconds longer than when they weren’t swearing. It has been suggested that this is a primitive reflex that serves as a form of catharsis.

The team followed up with a 2011 study showing that the pain-relief effect works best for subjects who typically don’t swear that often, perhaps because they attach a higher emotional value to swears. They also found that subjects’ heart rates increased when they swore, a sign of emotional arousal. But arousal might not be the only underlying mechanism. Other researchers have pointed out that profanity might be distracting, thereby taking one’s mind off the pain rather than serving as an actual analgesic.

So in 2020, the Stephens team conducted a follow-up study, using the same methodology as they had back in 2009, asking participants to either chant the F-word or the fake swears “fouch” and “twizpipe.” (Fun fact: the earliest known appearance of the F-word in the English language is in the name “Roger F$#%-by-the-Navel,” which appears in court records from 1310-11.)

The result: Only the F-word had any effect on pain outcomes. The team also measured the subjects’ pain threshold, asking them to indicate when the ice water began to feel painful. Those who chanted the F-word waited longer before indicating they felt pain—in other words, the swearing increased their threshold for pain. Chanting “fouch” or “twizpipe” had no effect on either measure.

F@%*-ing go for it

For this latest study, Stephens was interested in investigating a potential mechanism: swearing as a form of disinhibition (a state usually viewed negatively), building on his team’s 2018 and 2022 papers showing that swearing can improve strength in a chair push-up task. “In many situations, people hold themselves back—consciously or unconsciously—from using their full strength,” said Stephens. “By swearing, we throw off social constraint and allow ourselves to push harder in different situations. Swearing is an easily available way to help yourself feel focused, confident and less distracted, and ‘go for it’ a little more.”

In two separate experiments, participants were asked to select a swear word they’d normally use after, say, bumping their head, and a more neutral word to describe an inanimate object like a table. They then performed the aforementioned chair push-up task: sitting on a sturdy chair and placing their hands under their thighs with the fingers pointed inwards. Then they lifted their feet off the floor and straightened their arms to support their body weight for as long as possible, chanting either the swear word or the neutral word every two seconds. Afterward, subjects completed a questionnaire to assess various aspects of their mental state during the task.

The results: Subjects who swore during the task could support their body weight much longer than those who merely repeated the neutral word, replicating the results of similar past studies. Furthermore, subjects reported increases in their sense of psychological “flow,” distraction, and self-confidence, all indicators of increased disinhibition.

“These findings help explain why swearing is so commonplace,” said Stephens. “Swearing is literally a calorie-neutral, drug-free, low-cost, readily available tool at our disposal for when we need a boost in performance.” The team next plans to explore the influence of swearing on public speaking and romantic behaviors, since these are situations where most people are more hesitant and less confident in themselves, and hence more likely to hold back.

DOI: American Psychologist, 2025. 10.1037/amp0001650  (About DOIs).


School security AI flagged clarinet as a gun. Exec says it wasn’t an error.


A Florida middle school was locked down last week after an AI security system called ZeroEyes mistook a clarinet for a gun, reviving criticism that AI may not be worth the high price schools pay for peace of mind.

Human review of the AI-generated false flag did not stop police from rushing to Lawton Chiles Middle School. Cops expected to find “a man in the building, dressed in camouflage with a ‘suspected weapon pointed down the hallway, being held in the position of a shouldered rifle,’” according to a Washington Post review of the police report.

Instead, after finding no evidence of a shooter, cops double-checked with dispatchers who confirmed that a closer look at the images indicated that “the suspected rifle might have been a band instrument.” Among panicked students hiding in the band room, police eventually found the suspect, a student “dressed as a military character from the Christmas movie Red One for the school’s Christmas-themed dress-up day,” the Post reported.

ZeroEyes cofounder Sam Alaimo told the Post that the AI performed exactly as it should have in this case, adopting a “better safe than sorry” outlook. A ZeroEyes spokesperson told Ars that “school resource officers, security directors and superintendents consistently ask us to be proactive and forward them an alert if there is any fraction of a doubt that the threat might be real.”

“We don’t think we made an error, nor does the school,” Alaimo said. “That was better to dispatch [police] than not dispatch.”

Cops left after the confused student confirmed he was “unaware” that the way he was holding his clarinet could have triggered that alert, the Post reported. But ZeroEyes’ spokesperson claimed he was “intentionally holding the instrument in the position of a shouldered rifle.” And rather than probe why the images weren’t reviewed more carefully before a false alarm locked down the campus, the school appeared to side with ZeroEyes and blame the student.

“We did not make an error, and the school was pleased with the detection and their response,” ZeroEyes’ spokesperson said.

School warns students not to trigger AI

In a letter to parents, principal Melissa Laudani reportedly wrote that “while there was no threat to campus, I’d like to ask you to speak with your student about the dangers of pretending to have a weapon on a school campus.” Along similar lines, Seminole County Public Schools (SCPS) communications officer Katherine Crnkovich emphasized in an email to Ars: “Please make sure it is noted that this student wasn’t simply carrying a clarinet. This individual was holding it as if it were a weapon.”

However, warning students against brandishing ordinary objects as if they were weapons isn’t a perfect solution. Video footage from a Texas high school in 2023 showed that ZeroEyes can sometimes mistake shadows for guns, accidentally flagging a student simply walking into school as a potential threat. The advice also ignores that ZeroEyes last year reportedly triggered a lockdown and police response after detecting two theater kids using prop guns to rehearse a play. And a similar AI tool called Omnilert made national headlines after it confused an empty Doritos bag with a gun, leading to a 14-year-old Baltimore sophomore’s arrest. In that case, the student told the American Civil Liberties Union that he was just holding the chips when AI sent “like eight cop cars” to detain him.

For years, school safety experts have warned that AI tools like ZeroEyes take up substantial resources even though they are “unproven,” the Post reported. ZeroEyes’ spokesperson told Ars that “in most cases, ZeroEyes customers will never receive a ‘false positive,’” but the company is not transparent about how many false positives its system generates or how many guns it has detected. An FAQ only notes that “we are always looking to minimize false positives and are constantly improving our learning models based on data collected.” In March, as some students began questioning ZeroEyes after it flagged a Nerf gun at a Pennsylvania university, a nearby K-12 private school, Germantown Academy, confirmed that its “system often makes ‘non-lethal’ detections.”

One critic, school safety consultant Kenneth Trump, suggested in October that these tools are “security theater,” with firms like ZeroEyes lobbying for taxpayer dollars by relying on what the ACLU called “misleading” marketing to convince schools that the tools are proactive solutions to school shootings. Seemingly in response to such backlash, the company scrubbed a claim from its FAQ that said ZeroEyes “can prevent active shooter and mass shooting incidents”; StateScoop reported the claim disappeared in 2024, days after the outlet began probing ZeroEyes.

At Lawton Chiles Middle School, “the children were never in any danger,” police confirmed, but experts question whether false positives cause students undue stress and suspicion, perhaps doing more harm than good in the absence of efficacy studies. Schools may be better off dedicating resources to mental health services proven to benefit kids, some critics have suggested.

Laudani’s letter encouraged parents to submit any questions they have about the incident, but it’s hard to gauge whether anyone’s upset. Asked if parents were concerned or if ZeroEyes has ever triggered a lockdown at other SCPS schools, Crnkovich told Ars that SCPS does not “provide details regarding the specific school safety systems we utilize.”

It’s clear, however, that SCPS hopes to expand its use of ZeroEyes. In November, Florida state Senator Keith Truenow submitted a request to install “significantly more cameras”—about 850—equipped with ZeroEyes across the school district. Truenow backed up his request for $500,000 in funding over the next year by claiming that “the more [ZeroEyes] coverage there is, the more protected students will be from potential gun violence.”

AI false alarms pose dangers to students

ZeroEyes is among the most popular tools attracting heavy investments from schools in 48 states, which hope that AI gun detection will help prevent school shootings. The AI technology is embedded in security cameras, trained on images of people holding guns, and can supposedly “detect as little as an eighth of an inch of a gun,” an ABC affiliate in New York reported.

Humans monitor these systems continually, reviewing AI flags and texting any concerning images to school superintendents. Police are alerted when human review determines the images may constitute an actual threat. ZeroEyes’ spokesperson told Ars that “it has detected more than 1,000 weapons in the last three years.” Perhaps most notably, ZeroEyes “detected a minor armed with an AK-47 rifle on an elementary school campus in Texas,” where no shots were fired, StateScoop reported last year.

Schools invest tens of thousands of dollars annually or, as the SCPS case shows, even hundreds of thousands, with the exact amount depending on the number of cameras they want to equip and other variables impacting pricing. ZeroEyes estimates that most schools pay $60 per camera monthly, though bigger contracts can discount costs. In Kansas, a statewide initiative equipping 25 cameras at each of 1,300 schools with ZeroEyes was reportedly estimated to cost $8.5 million annually. Doubling the number of cameras didn’t provide much savings, though, with ZeroEyes looking to charge $15.2 million annually for the expanded coverage.
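For a sense of the per-camera economics, here is a quick back-of-the-envelope check on those Kansas figures (assuming “25 cameras at each of 1,300 schools” means 32,500 cameras in total, the only reading that fits the dollar amounts):

```python
# Rough per-camera cost implied by the reported Kansas figures.
cameras = 25 * 1_300                        # 32,500 cameras statewide
per_cam_month = 8_500_000 / cameras / 12    # $8.5M/year initial proposal
print(f"${per_cam_month:.2f}/camera/month")         # ~ $21.79

doubled = 15_200_000 / (cameras * 2) / 12   # $15.2M/year for double coverage
print(f"${doubled:.2f}/camera/month")               # ~ $19.49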

To critics, it appears that ZeroEyes is attempting to corner the market on AI school security, standing to profit off schools’ fears of shootings, while showing little proof of the true value of its systems. Last year, ZeroEyes reported its revenue grew 300 percent year over year from 2023 to 2024, after assisting in “more than ten arrests through its thousands of detections, verifications, and notifications to end users and law enforcement.”

Curt Lavarello, the executive director of the School Safety Advocacy Council, told the ABC News affiliate that “all of this technology is very, very expensive,” considering that “a lot of products … may not necessarily do what they’re being sold to do.”

Another problem, according to experts who have responded to some of the country’s deadliest school shootings, is that while ZeroEyes’ human reviewers can alert police in “seconds,” police response can often take “several minutes.” That delay could diminish ZeroEyes’ impact, one expert suggested, noting that at an Oregon school he responded to, there was a shooter who “shot 25 people in 60 seconds,” StateScoop reported.

In Seminole County, where the clarinet incident happened, ZeroEyes has been used since 2021, but SCPS would not confirm whether any guns have ever been detected, the kind of data that could justify next year’s desired expansion. It’s possible that SCPS has this information, as Sen. Truenow noted in his funding request that ZeroEyes can share reports with schools “to measure the effectiveness of the ZeroEyes deployment” by reporting on “how many guns were detected and alerted on campus.”

ZeroEyes’ spokesperson told Ars that “trained former law enforcement and military make split-second, life-or-death decisions about whether the threat is real,” which is supposed to help reduce false positives that could become more common as SCPS adds ZeroEyes to many more cameras.

Amanda Klinger, the director of operations at the Educator’s School Safety Network, told the Post that too many false alarms could carry two risks. First, more students could be put in dangerous situations when police descend on schools where they anticipate confronting an active shooter. And second, cops may become fatigued by false alarms, perhaps failing to respond with urgency over time. For students, when AI labels them as suspects, it can also be invasive and humiliating, reports noted.

“We have to be really clear-eyed about what are the limitations of these technologies,” Klinger said.


Swearing Actually Seems To Make Humans Physically Stronger

alternative_right shares a report from ScienceAlert: A new study adds to the growing body of evidence that swearing can help us unleash our inner strength, improving physical performance, it seems, by helping people break through certain psychological barriers. [...] [Psychology researcher Richard Stephens of Keele University in the UK] and his colleagues at Keele and the University of Alabama wanted to test not only whether swearing could improve physical performance, as they had shown in previous research, but also whether it does so by changing a person's psychology in the moment, especially when it comes to letting go of inhibitions.

Eighty-eight participants, aged 18 to 65, all in good enough shape to exert themselves physically, were recruited at a university campus for the first experiment. They each selected a pair of words based on the following prompts: a swear word you might utter after bumping your head, and a neutral word you might use to describe a table. Then, they undertook a chair push-up, which involves sitting in a chair and, holding each side of the seat, using your arms to lift your entire body weight (bottom off the chair, feet off the floor). [...]

Both experiments suggested that swearing offers an advantage in physical performance, with participants achieving longer chair push-up hold times as they repeated their foul-mouthed mantras. Scores for positive emotion, humor, distraction, and novelty were also elevated in the swearing tests, which suggests invoking their favorite four-letter word might enable people to transition into more action-oriented states, and perhaps actually enjoy their workout more. The research is published in American Psychologist.


Anthropic's AI Lost Hundreds of Dollars Running a Vending Machine After Being Talked Into Giving Everything Away

Anthropic let its Claude AI run a vending machine in the Wall Street Journal newsroom for three weeks as part of an internal stress test called Project Vend, and the experiment ended in financial ruin after journalists systematically manipulated the bot into giving away its entire inventory for free. The AI, nicknamed Claudius, was programmed to order inventory, set prices, and respond to customer requests via Slack. It had a $1,000 starting balance and autonomy to make individual purchases up to $80.

Within days, WSJ reporters had convinced it to declare an "Ultra-Capitalist Free-for-All" that dropped all prices to zero. The bot also approved purchases of a PlayStation 5, a live betta fish, and bottles of Manischewitz wine -- all subsequently given away. The business ended more than $1,000 in the red.

Anthropic introduced a second version featuring a separate "CEO" bot named Seymour Cash to supervise Claudius. Reporters staged a fake boardroom coup using fabricated PDF documents, and both AI agents accepted the forged corporate governance materials as legitimate. Logan Graham, head of Anthropic's Frontier Red Team, said the chaos represented a road map for improvement rather than failure.
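The story doesn't detail Anthropic's safeguards beyond the $1,000 balance and the $80 per-purchase cap, but limits like those are typically enforced as a hard guardrail outside the model, so the agent can't be talked out of them. A minimal illustrative sketch follows; the names and structure are hypothetical, not Anthropic's implementation:

```python
# Hypothetical hard spending guardrail enforced outside the LLM agent,
# based on the two limits the article mentions. Illustrative only.
class SpendingGuardrail:
    def __init__(self, balance: float = 1000.0, per_purchase_cap: float = 80.0):
        self.balance = balance
        self.per_purchase_cap = per_purchase_cap

    def approve(self, amount: float) -> bool:
        """Deterministically reject purchases over the cap or the balance."""
        if amount <= 0 or amount > self.per_purchase_cap:
            return False
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

guard = SpendingGuardrail()
print(guard.approve(79.99))   # True: within the $80 cap
print(guard.approve(450.00))  # False: a PS5-sized purchase is refused
```

The point of keeping such checks outside the model is exactly what the WSJ reporters exploited: anything the LLM itself controls (prices, refunds, approvals) can in principle be argued away, which is how the "Ultra-Capitalist Free-for-All" set every price to zero.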
